4 research outputs found

    A framework for evaluating the quality of modelling languages in MDE environments

    This thesis presents the Multiple Modelling Quality Evaluation Framework method (hereinafter MMQEF), a conceptual, methodological, and technological framework for evaluating quality issues in modelling languages and modelling elements through taxonomic analysis. It derives analytic procedures that support the detection of quality issues in model-driven projects, such as the suitability of modelling languages, traces between abstraction levels, specifications for model transformations, and integration between modelling proposals. MMQEF also suggests metrics for performing the analytic procedures based on the classification obtained for the modelling languages and artifacts under evaluation. MMQEF uses a taxonomy extracted from the Zachman framework for Information Systems (Zachman, 1987; Sowa and Zachman, 1992), which proposed a visual language for classifying the elements that are part of an Information System (IS). These elements range from organizational to technical artifacts. The visual language consists of a bi-dimensional matrix for classifying IS elements (generally expressed as models) and a set of seven rules for performing the classification. As an evaluation method, MMQEF defines activities for deriving quality analytics based on the classification applied to modelling languages and elements. The Zachman framework was chosen because it was one of the first and most precise proposals for a reference architecture for IS, a status recognized by important standards such as ISO/IEC/IEEE 42010 (2011). This thesis presents the conceptual foundation of the evaluation framework, which is based on the definition of quality for model-driven engineering (MDE). The methodological and technological support of MMQEF is also described. Finally, some validations of MMQEF are reported.
    Giraldo Velásquez, F. D. (2017). A framework for evaluating the quality of modelling languages in MDE environments [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90628
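    As a rough illustration of the taxonomic classification the abstract describes, the sketch below places modelling-language constructs into a bi-dimensional matrix whose axes loosely follow the Zachman perspectives and interrogatives, then computes a simple coverage figure. The cell labels, the example constructs, and the coverage metric are illustrative assumptions, not MMQEF's actual taxonomy, rules, or metrics.

    from collections import defaultdict

    # Illustrative axes loosely based on the Zachman framework; the real MMQEF
    # taxonomy and its seven classification rules are not reproduced here.
    PERSPECTIVES = ["Scope", "Business", "System", "Technology", "Detail"]
    INTERROGATIVES = ["What", "How", "Where", "Who", "When", "Why"]

    def classify(elements):
        """Place (element, perspective, interrogative) triples into matrix cells."""
        matrix = defaultdict(list)
        for name, perspective, interrogative in elements:
            assert perspective in PERSPECTIVES and interrogative in INTERROGATIVES
            matrix[(perspective, interrogative)].append(name)
        return matrix

    def coverage(matrix):
        """Fraction of taxonomy cells occupied by at least one classified element."""
        return len(matrix) / (len(PERSPECTIVES) * len(INTERROGATIVES))

    # Hypothetical classification of a few UML and BPMN constructs.
    elements = [
        ("UML Class", "System", "What"),
        ("UML Activity", "System", "How"),
        ("BPMN Process", "Business", "How"),
        ("BPMN Participant", "Business", "Who"),
    ]
    print(f"cells covered: {coverage(classify(elements)):.0%}")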

    Evaluating the quality of a set of modelling languages used in combination: A method and a tool

    [EN] Modelling languages have proved to be an effective tool for specifying and analysing various perspectives of enterprises and information systems. In addition to modelling language designs, work on model quality and modelling language quality evaluation has contributed to the maturity of the model-driven engineering (MDE) field. Although consolidated knowledge on quality evaluation is still relevant to this scenario, in previous work we have identified misalignments between the topics that academia is addressing and the needs of industry in applying MDE, thus identifying some remaining challenges. In this paper, we focus on the need for a method to evaluate the quality of a set of modelling languages used in combination within an MDE environment. This paper presents MMQEF (Multiple Modelling language Quality Evaluation Framework), describing its foundations, presenting its method components, and discussing its trade-offs. This work was supported by COLCIENCIAS (Colombia) (grant 512, 2010) and the European Commission FP7 Project CaaS (611351).
    Giraldo-Velásquez, F. D.; España Cubillo, S.; Giraldo, W. J.; Pastor López, O. (2018). Evaluating the quality of a set of modelling languages used in combination: A method and a tool. Information Systems, 77:48-70. https://doi.org/10.1016/j.is.2018.06.002
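    To make "used in combination" concrete, the sketch below compares the taxonomy cells that two languages occupy after a classification like the one illustrated above, reporting overlap (possible redundancy) and combined coverage. The cell sets and the measures are hypothetical illustrations, not the metrics published with MMQEF.

    # Crude comparison of two languages over the same illustrative taxonomy cells;
    # the overlap/coverage/gap measures are assumptions, not MMQEF's metrics.
    def combination_report(cells_a, cells_b, total_cells):
        union = cells_a | cells_b
        return {
            "overlap": len(cells_a & cells_b),              # cells both languages occupy
            "combined_coverage": len(union) / total_cells,  # cells covered by the pair
            "uncovered_cells": total_cells - len(union),    # cells neither language addresses
        }

    # Hypothetical cell sets obtained by classifying UML and BPMN constructs.
    uml_cells = {("System", "What"), ("System", "How")}
    bpmn_cells = {("Business", "How"), ("Business", "Who"), ("System", "How")}
    print(combination_report(uml_cells, bpmn_cells, total_cells=30))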

    Designing the Didactic Strategy Modeling Language (DSML) From PoN: An Activity Oriented EML Proposal

    [EN] This paper presents the design of the Didactic Strategy Modeling Language (DSML) according to the principles of the Physics of Notations (PoN). DSML is a visual, activity-oriented language for learning design, characterized by representing different activities according to the nature of the task. Once the language is designed, a blind interpretation study is conducted to validate the semantic transparency of the learning-activity iconography. The results of the study make it possible to refine the icons. In addition, an authoring tool for DSML, integrated into an LMS, is presented. As a result, a model-driven course was designed as a pre-validation of DSML.
    Ruiz, A.; Panach Navarrete, J. I.; Pastor López, O.; Giraldo-Velásquez, F. D.; Arciniegas, J. L.; Giraldo, W. J. (2018). Designing the Didactic Strategy Modeling Language (DSML) From PoN: An Activity Oriented EML Proposal. IEEE-RITA: Latin-American Learning Technologies Journal, 13(4):136-143. https://doi.org/10.1109/RITA.2018.2879262
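    As a minimal sketch of how a blind interpretation study can be tallied, the snippet below computes, for each icon, the share of participants whose free interpretation matched the intended meaning and flags icons for refinement. The icon names, judgements, and threshold are hypothetical and do not reproduce the study's protocol or results.

    # Hypothetical judgements: True when a participant's blind interpretation of an
    # icon matched its intended meaning.
    responses = {
        "group_discussion_icon": [True, True, False, True],
        "assessment_icon": [True, False, False, False],
    }

    THRESHOLD = 0.75  # assumed cut-off for judging an icon semantically transparent

    for icon, judgements in responses.items():
        hit_rate = sum(judgements) / len(judgements)
        verdict = "keep" if hit_rate >= THRESHOLD else "refine"
        print(f"{icon}: hit rate {hit_rate:.0%} -> {verdict}")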

    Integrating technical debt into MDE

    The main goal of this work is to evaluate the feasibility of calculating technical debt (a traditional software quality approach) in a model-driven context with the same tools that software developers use at work. The SonarQube tool was used, so that the quality check was performed directly on projects created with the Eclipse Modeling Framework (EMF) instead of traditional source code projects. In this work, XML was used as the model specification language to verify in SonarQube, because EMF metamodels are created in XMI (XML Metadata Interchange) and SonarQube offers a plugin to assess the XML language. After this, our work focused on the definition of model rules as an XSD schema (XML Schema Definition) and on the integration between EMF and SonarQube, so that these metrics were validated directly by SonarQube; subsequently, this tool determined the technical debt that the analyzed EMF models could contain. F. G. thanks Colciencias (Colombia) for funding this work through the Colciencias Grant call 512-2010. This work has been supported by the Spanish MICINN PROS-Req (TIN2010-19130-C02-02), the Generalitat Valenciana Project ORCA (PROMETEO/2009/015), the European Commission FP7 Project CaaS (611351), and ERDF structural funds.
    Giraldo Velásquez, F. D.; España Cubillo, S.; Pineda, M. A.; Giraldo, W. J.; Pastor López, O. (2014). Integrating technical debt into MDE. CEUR Workshop Proceedings. http://hdl.handle.net/10251/68278
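    The core idea, checking XMI-serialized EMF models against rules expressed as an XSD schema, can be sketched outside SonarQube. The snippet below is a minimal sketch, assuming a hypothetical model_rules.xsd and an EMF model serialized as library.xmi; the approach in the paper routes such checks through SonarQube's XML plugin rather than a standalone script.

    from lxml import etree

    # Load the hypothetical rule schema and the XMI-serialized EMF model.
    schema = etree.XMLSchema(etree.parse("model_rules.xsd"))
    model = etree.parse("library.xmi")

    if schema.validate(model):
        print("model conforms to the XSD rules")
    else:
        # Each violation could feed a technical-debt estimate.
        for error in schema.error_log:
            print(f"line {error.line}: {error.message}")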